
    Generalized Extreme Value Regression for Binary Rare Events Data: an Application to Credit Defaults

    The most widely used regression model for a binary dependent variable is the logistic regression model. When the dependent variable represents a rare event, logistic regression shows relevant drawbacks. To overcome these drawbacks, we propose the Generalized Extreme Value (GEV) regression model. In particular, in a Generalized Linear Model (GLM) with a binary dependent variable, we suggest the quantile function of the GEV distribution as the link function, so our attention is focused on the tail of the response curve for values close to one. Estimation is by maximum likelihood. This model accommodates skewness and generalizes GLMs with the log-log link function. In credit risk analysis a pivotal topic is the estimation of default probability. Since defaults are rare events, we apply GEV regression to empirical data on Italian Small and Medium Enterprises (SMEs) to model their default probabilities.
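    The GEV link described above can be sketched in a few lines: the response probability is the GEV distribution function evaluated at the linear predictor, and the coefficients maximise the Bernoulli likelihood. This is an illustrative sketch on simulated data, not the authors' implementation; the shape parameter xi is held fixed here for simplicity, whereas the paper estimates it.

```python
import numpy as np
from scipy.optimize import minimize

def gev_cdf(eta, xi):
    # GEV response curve: P(y=1 | eta) = exp(-(1 + xi*eta)^(-1/xi)) on 1 + xi*eta > 0
    t = np.clip(1.0 + xi * eta, 1e-10, None)  # stay inside the GEV support
    return np.exp(-t ** (-1.0 / xi))

def neg_loglik(beta, X, y, xi):
    # Bernoulli negative log-likelihood under the GEV inverse link
    p = np.clip(gev_cdf(X @ beta, xi), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
true_beta = np.array([-1.0, 0.8])                      # made-up coefficients
xi = -0.25                                             # fixed shape (assumption)

# simulate a moderately rare binary response from the GEV response curve
y = rng.binomial(1, gev_cdf(X @ true_beta, xi))

# maximum likelihood fit of the regression coefficients
fit = minimize(neg_loglik, x0=np.zeros(2), args=(X, y, xi), method="BFGS")
```

    The asymmetry of the GEV response curve is what focuses attention on the rare-event tail; with a logit link the curve would be symmetric around 0.5.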

    Bankruptcy Prediction of Small and Medium Enterprises Using a Flexible Binary Generalized Extreme Value Model

    We introduce a binary regression accounting-based model for bankruptcy prediction of small and medium enterprises (SMEs). The main advantage of the model lies in its predictive performance in identifying defaulted SMEs. Another advantage, especially relevant for banks, is that the relationship between the accounting characteristics of SMEs and the response is not assumed a priori (e.g., linear, quadratic or cubic) and can be determined from the data. The proposed approach uses the quantile function of the generalized extreme value distribution as the link function, as well as smooth functions of accounting characteristics, to flexibly model covariate effects. Therefore, the usual assumptions in scoring models of a symmetric link function and linear or pre-specified covariate-response relationships are relaxed. Out-of-sample and out-of-time validation on Italian data shows that our proposal outperforms the commonly used (logistic) scoring model for different default horizons.

    A goodness-of-fit test for maximum order statistics from discrete distributions

    In economic, financial and environmental studies, extreme value theory is used for the evaluation of several complex phenomena, e.g., in risk management, natural calamities, meteorology and pollution studies. When the observed values are discrete, like count measurements, discrete extreme value distributions should be applied. In this paper we propose a procedure to evaluate the goodness of fit of extreme values from discrete distributions. In particular, we modify the classic statistic of the Kolmogorov-Smirnov goodness-of-fit test for continuous distribution functions. This modification is necessary because the Kolmogorov-Smirnov test assumes continuity of the distribution specified under the null hypothesis. The distribution of the proposed test statistic is given. Exact critical values of the test statistic are tabulated for extreme values from some specific discrete distributions. An application in environmental science is presented.
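    The core of such a statistic can be illustrated as follows: for block maxima of a discrete distribution with CDF F0, the null CDF of the maximum of m i.i.d. draws is F0(x)^m, and a Kolmogorov-Smirnov-type statistic is the largest gap between this and the empirical CDF, evaluated only on the discrete support. This is a sketch with an assumed Poisson null; the exact critical values come from the paper's tables, not from the continuous KS distribution.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
m, n = 50, 200            # block size and number of block maxima (assumed setup)
lam = 3.0                 # Poisson rate under the null (assumption)

# observed block maxima of Poisson(lam) counts
maxima = rng.poisson(lam, size=(n, m)).max(axis=1)

# null CDF of the maximum of m iid Poisson draws: F0(x)^m
support = np.arange(0, maxima.max() + 10)
F_max = poisson.cdf(support, lam) ** m

# empirical CDF of the observed maxima on the same discrete support
F_emp = np.array([(maxima <= x).mean() for x in support])

# KS-type statistic restricted to the support points of the discrete law
D = np.abs(F_emp - F_max).max()
```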

    A new approach to measure systemic risk: A bivariate copula model for dependent censored data

    We propose a novel approach based on the Marshall-Olkin (MO) copula to estimate the impact of systematic and idiosyncratic components on cross-border systemic risk. To use the data on non-failed banks in the suggested method, we consider the time to bank failure as a censored variable. Therefore, we propose a pseudo-maximum-likelihood estimation procedure for the MO copula under Type I censoring. We derive the log-likelihood function, the copula parameter estimator and the bootstrap confidence intervals. Empirical data on the banking systems of three European countries (Germany, Italy and the UK) show that the proposed censored model can accurately estimate the systematic component of cross-border systemic risk.
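    The MO copula's role can be illustrated by its stochastic construction: each failure time is the minimum of an idiosyncratic shock and a shock common to both banks, and Type I censoring truncates observation at a fixed horizon. This is an illustrative simulation with made-up shock rates, not the paper's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
lam1, lam2, lam12 = 0.5, 0.7, 0.3   # idiosyncratic and common shock rates (illustrative)

# Marshall-Olkin construction: failure = first of an idiosyncratic shock
# and a shock common to both banks (the systematic component)
e1 = rng.exponential(1 / lam1, n)
e2 = rng.exponential(1 / lam2, n)
e12 = rng.exponential(1 / lam12, n)
t1 = np.minimum(e1, e12)
t2 = np.minimum(e2, e12)

# Type I censoring: observation ends at a fixed time c (non-failed banks)
c = 1.5
obs1, d1 = np.minimum(t1, c), (t1 <= c).astype(int)   # observed time, failure flag
obs2, d2 = np.minimum(t2, c), (t2 <= c).astype(int)

# the common shock gives a positive probability of simultaneous failure,
# which is the singular component of the MO copula
simult = np.mean(t1 == t2)
```

    In this parameterisation the theoretical probability of a simultaneous failure is lam12 / (lam1 + lam2 + lam12); the censored likelihood in the paper is built from exactly these observed pairs (time, failure indicator).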

    The effectiveness of TARP-CPP on the US banking industry: A new copula-based approach

    Following the 2008 financial crisis, regulatory authorities and governments provided distressed banks with equity infusions in order to strengthen national banking systems. However, the effectiveness of these interventions for financial stability has not been extensively researched in the literature. In order to understand the effectiveness of these bailouts for the solvency of banks, this paper proposes a new model: the Longitudinal Binary Generalised Extreme Value (LOBGEV) model. Unlike existing models, the LOBGEV model allows us to analyse the temporal structure of the probability of failure for banks, both for those that received a bailout and for those that did not. In particular, it combines the flexibility of the D-vine copula with the accuracy of the generalised extreme value model in estimating the probability of bank failure and of banks receiving approval for capital injection. We apply this new model to the US banking system from 2008 to 2013 in order to investigate how and to what extent the Troubled Asset Relief Program (TARP) Capital Purchase Program (CPP) reduced the probability of failure of commercial banks. We specifically identify a set of macroeconomic and bank-specific factors that affect the probability of bank failure for TARP-CPP recipients and for those that did not receive capital under TARP-CPP. Our results suggest that TARP-CPP provided only short-term relief for US commercial banks.

    A comparative analysis of the UK and Italian small businesses using Generalised Extreme Value models

    This paper presents a cross-country comparison of significant predictors of small business failure between Italy and the UK. Financial measures of profitability, leverage, coverage, liquidity and scale, together with non-financial information, are explored, and some commonalities and differences are highlighted. Several models are considered, starting with logistic regression, which is the standard approach in credit risk modelling. Some important improvements are investigated. Generalised Extreme Value (GEV) regression is applied in contrast to logistic regression in order to produce more conservative estimates of default probability. The assumption of linearity is relaxed through application of BGEVA, a non-parametric additive model based on the GEV link function. Two methods of handling missing values are compared: multiple imputation and the Weights of Evidence (WoE) transformation. The results suggest that the best predictive performance is obtained by BGEVA, implying the necessity of taking into account the low volume of defaults and non-linear patterns when modelling SME performance. For the majority of models considered, WoE shows better prediction than multiple imputation, suggesting that missing values could be informative.
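    The WoE transformation mentioned above can be sketched as follows: each bin of a covariate, including a dedicated bin for missing values, is scored by the log-ratio of its share of non-defaults to its share of defaults. This is a generic sketch on simulated data; bin boundaries, missing rate and the default flag are made up.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
x[rng.random(n) < 0.1] = np.nan          # 10% missing, kept as an informative bin
y = rng.binomial(1, 0.1, n)              # rare default flag (illustrative)

df = pd.DataFrame({"x": x, "y": y})
df["bin"] = pd.cut(df["x"], bins=4).cat.add_categories("missing")
df["bin"] = df["bin"].fillna("missing")  # missing values get their own bin

# WoE per bin: log of the bin's share of non-defaults over its share of defaults
grp = df.groupby("bin", observed=False)["y"].agg(["count", "sum"])
goods = grp["count"] - grp["sum"]        # non-defaults per bin
bads = grp["sum"]                        # defaults per bin
woe = np.log((goods / goods.sum()).clip(lower=1e-6) /
             (bads / bads.sum()).clip(lower=1e-6))
```

    Replacing the raw covariate by its bin's WoE value yields a monotone, missing-aware input for the scoring model, which is why an informative missingness pattern can improve prediction.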

    Modelling the dependence in multivariate longitudinal data by pair copula decomposition

    A new, flexible way of modelling the dependence between the components of non-normal multivariate longitudinal data is proposed using the copula approach. Longitudinal data are increasingly common in scientific areas where several variables are measured over a sample of statistical units at different times, showing two types of dependence: between variables and across time. To account for both types of dependence, the proposed model considers two levels of analysis. First, at a given time, the relations among the variables are modelled using a copula, which allows us to relax the assumption of normality. At the second level, each longitudinal series, corresponding to a given response over time, is modelled separately using a pair copula decomposition to relate the distributions of that variable at different times. The pair copula decomposition allows us to overcome the problem of the multivariate copulae used in the literature, which suffer from rather inflexible structures in high dimensions. The result is a new, extremely flexible multivariate longitudinal model, which overcomes the problem of modelling simultaneous dependence between two or more non-normal time series.
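    The pair-copula idea can be sketched for the time dimension: consecutive time points are chained through bivariate copulas, here Gaussian pairs for illustration. This is only the first tree of a D-vine for a single response, with an assumed correlation parameter; the paper additionally couples the different variables at each time point through a copula.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def gauss_pair_copula_sample(u_prev, rho, rng):
    # draw u_t given u_{t-1} from a bivariate Gaussian pair copula with parameter rho
    z_prev = norm.ppf(u_prev)
    z = rho * z_prev + np.sqrt(1 - rho**2) * rng.normal(size=u_prev.shape)
    return norm.cdf(z)

# chain T time points of one response through consecutive pair copulas
n, T, rho = 500, 4, 0.6                  # sample size, times, pair parameter (made up)
U = np.empty((n, T))
U[:, 0] = rng.random(n)                  # uniform margin at the first time
for t in range(1, T):
    U[:, t] = gauss_pair_copula_sample(U[:, t - 1], rho, rng)
```

    Because each link is a separate bivariate copula, every pair can in principle use a different family or parameter, which is exactly the flexibility that a single high-dimensional multivariate copula lacks.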